Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these capabilities to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so these methods rely on data-driven priors for reconstructing dynamic content. We replace such priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves the robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of the RGB+ToF sensors now available on modern smartphones.
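As a rough illustration of what "raw continuous-wave ToF measurements" means, here is a simplified textbook correlation model (a sketch under standard CW-ToF assumptions, not the paper's implementation): a single-bounce return at distance d produces a raw correlation sample rather than a depth value, and depth recovered from several phase offsets wraps beyond the sensor's unambiguous range.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def cw_tof_measurement(distance_m, amplitude, f_mod_hz=30e6, tau=0.0, ambient=0.0):
    """Raw correlation sample for a single-bounce return at the given distance."""
    phase = 4.0 * np.pi * distance_m * f_mod_hz / C  # round-trip modulation phase shift
    return amplitude * np.cos(phase + tau) + ambient

# Depth is conventionally recovered from four phase offsets; the recovered
# phase wraps, limiting the unambiguous range to c / (2 * f_mod) = 5 m here.
taus = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
raw = np.array([cw_tof_measurement(2.5, amplitude=1.0, tau=t) for t in taus])
phase_est = np.arctan2(raw[3] - raw[1], raw[0] - raw[2])
depth_est = (phase_est % (2 * np.pi)) * C / (4 * np.pi * 30e6)
print(depth_est)  # ~2.5 m for this in-range example
```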
The ability for an agent to continuously learn new skills without catastrophically forgetting existing knowledge is of critical importance for the development of generally intelligent agents. Most methods devised to address this problem depend heavily on well-defined task boundaries, and thus depend on human supervision. Our task-agnostic method, Self-Activating Neural Ensembles (SANE), uses a modular architecture designed to avoid catastrophic forgetting without making any such assumptions. At the beginning of each trajectory, a module in the SANE ensemble is activated to determine the agent's next policy. During training, new modules are created as needed and only activated modules are updated to ensure that unused modules remain unchanged. This system enables our method to retain and leverage old skills, while growing and learning new ones. We demonstrate our approach on visually rich procedurally generated environments.
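A minimal, hypothetical sketch of the selection-and-update loop described above; the class and method names (SaneEnsemble, score, update) are illustrative stand-ins rather than the authors' API.

```python
from typing import Callable, List

class Module:
    """One ensemble member: a policy plus a self-activation score."""
    def __init__(self, policy: Callable, score: Callable):
        self.policy = policy    # maps observation -> action
        self.score = score      # maps observation -> activation score

class SaneEnsemble:
    def __init__(self, make_module: Callable[[], Module], creation_threshold: float):
        self.make_module = make_module
        self.threshold = creation_threshold
        self.modules: List[Module] = []

    def activate(self, obs) -> Module:
        """Pick the best-matching module for this trajectory, or create a new one."""
        if self.modules:
            best = max(self.modules, key=lambda m: m.score(obs))
            if best.score(obs) >= self.threshold:
                return best
        new = self.make_module()
        self.modules.append(new)
        return new

def run_trajectory(env, ensemble: SaneEnsemble, update_fn):
    obs, done, transitions = env.reset(), False, []
    module = ensemble.activate(obs)        # one module governs the whole trajectory
    while not done:
        action = module.policy(obs)
        obs, reward, done, _ = env.step(action)
        transitions.append((obs, action, reward))
    update_fn(module, transitions)         # only the activated module is updated; others stay frozen
```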
Diffusion models have achieved great success in modeling continuous data modalities such as images, audio, and video, but have seen limited use in discrete domains such as language. Recent attempts to adapt diffusion to language have presented diffusion as an alternative to autoregressive language generation. We instead view diffusion as a complementary method that can augment the generative capabilities of existing pre-trained language models. We demonstrate that continuous diffusion models can be learned in the latent space of a pre-trained encoder-decoder model, enabling us to sample continuous latent representations that can be decoded into natural language with the pre-trained decoder. We show that our latent diffusion models are more effective at sampling novel text from data distributions than a strong autoregressive baseline and also enable controllable generation.
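A self-contained sketch of the sampling idea above: run a DDPM-style reverse process in a continuous latent space, then hand the resulting latents to a pre-trained decoder. The `toy_denoiser` and the commented decoding call are illustrative stand-ins, not the paper's code.

```python
import torch

def ddpm_sample_latents(denoiser, shape, num_steps=1000, beta_min=1e-4, beta_max=0.02):
    """Ancestral sampling of continuous latents from a learned noise predictor."""
    betas = torch.linspace(beta_min, beta_max, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    z = torch.randn(shape)                                   # start from pure noise
    for t in reversed(range(num_steps)):
        eps = denoiser(z, t)                                 # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (z - coef * eps) / torch.sqrt(alphas[t])      # posterior mean
        noise = torch.randn_like(z) if t > 0 else torch.zeros_like(z)
        z = mean + torch.sqrt(betas[t]) * noise
    return z

# Toy usage: a real system would plug in the trained latent denoiser and decode
# the sampled latents with the frozen pre-trained encoder-decoder's decoder.
toy_denoiser = lambda z, t: torch.zeros_like(z)
latents = ddpm_sample_latents(toy_denoiser, shape=(2, 16, 64), num_steps=50)
# text = pretrained_decoder.decode(latents)  # hypothetical decoding call
```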
The SNMMI Artificial Intelligence (SNMMI-AI) Summit, organized by the SNMMI AI Task Force, took place in Bethesda, MD on March 21-22, 2022. It brought together various community members and stakeholders from academia, healthcare, industry, patient representatives, and government (NIH, FDA), and considered various key themes to envision and facilitate a bright future for routine, trustworthy use of AI in nuclear medicine. In what follows, essential issues, challenges, controversies and findings emphasized in the meeting are summarized.
Drawing on recent advances from the physics community, we propose a new method for discovering the nonlinear dynamics of physical systems in reinforcement learning (RL). We establish that this method can discover the underlying dynamics using significantly fewer trajectories (as few as $\leq 30$ time steps) than state-of-the-art model learning algorithms. Further, the technique learns a model accurate enough to induce near-optimal policies with far fewer trajectories than model-free algorithms require. It brings the benefits of model-based RL without requiring a model to be developed ahead of time, for systems whose dynamics are governed by physics. To establish the validity and applicability of the algorithm, we conduct experiments on four classic control tasks. We find that optimal policies trained on the discovered dynamics of the underlying system generalize well. Moreover, the learned policies perform well when deployed on the actual physical system, thereby bridging the model-to-real-system gap. We compare our method against state-of-the-art model-based and model-free approaches, and show that it requires fewer trajectories sampled on the real physical system than these other methods. Additionally, we explore approximate dynamics models and find that they, too, can perform well.
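The abstract does not name the dynamics-discovery technique; one prominent method from that line of physics work is SINDy-style sparse regression over a library of candidate nonlinear terms, sketched below under that assumption (the paper's exact procedure may differ). Given states and their time derivatives, we fit a sparse linear model $\dot{x} \approx \Theta(x)\,\Xi$ via sequentially thresholded least squares.

```python
import numpy as np

def build_library(X):
    """Candidate terms: 1, x_i, x_i * x_j (a small polynomial library)."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def sindy(X, dX, threshold=0.1, iters=10):
    """Sequentially thresholded least squares for sparse dynamics coefficients."""
    Theta = build_library(X)
    Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(iters):
        Xi[np.abs(Xi) < threshold] = 0.0                   # prune small coefficients
        for k in range(dX.shape[1]):
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)
    return Xi

# Toy usage: recover dx/dt = -2x from a single 30-step trajectory, echoing the
# short-trajectory regime highlighted above.
t = np.linspace(0, 3, 30)
x = np.exp(-2 * t).reshape(-1, 1)
dx = -2 * x
print(sindy(x, dx, threshold=0.05))  # expected coefficients ~ [0, -2, 0] for [1, x, x^2]
```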
Progress in continual reinforcement learning has been limited due to several barriers to entry: missing code, high compute requirements, and a lack of suitable benchmarks. In this work, we present CORA, a platform for Continual Reinforcement Learning Agents that provides benchmarks, baselines, and metrics in a single code package. The benchmarks we provide are designed to evaluate different aspects of the continual RL challenge, such as catastrophic forgetting, plasticity, ability to generalize, and sample-efficient learning. Three of the benchmarks utilize video game environments (Atari, Procgen, NetHack). The fourth benchmark, CHORES, consists of four different task sequences in a visually realistic home simulator, drawn from a diverse set of task and scene parameters. To compare continual RL methods on these benchmarks, we prepare three metrics in CORA: Continual Evaluation, Isolated Forgetting, and Zero-Shot Forward Transfer. Finally, CORA includes a set of performant, open-source baselines of existing algorithms for researchers to use and expand on. We release CORA and hope that the continual RL community can benefit from our contributions, to accelerate the development of new continual RL algorithms.
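The metric definitions are not spelled out in this summary; as a generic illustration of how forgetting-style continual-RL metrics are typically computed from a task-by-checkpoint return matrix (not necessarily CORA's exact definitions), consider this small sketch.

```python
import numpy as np

def summarize(returns: np.ndarray):
    """returns[i, j] = evaluation return on task j after finishing training on task i."""
    num_tasks = returns.shape[0]
    final = returns[-1]                                   # performance after the full sequence
    # Forgetting on task j: best return observed on j (once trained) minus final return.
    forgetting = np.array([returns[j:, j].max() - final[j] for j in range(num_tasks - 1)])
    return {"avg_final_return": final.mean(), "avg_forgetting": forgetting.mean()}

# Toy usage with a 3-task sequence.
R = np.array([[0.9, 0.1, 0.0],
              [0.6, 0.8, 0.1],
              [0.5, 0.7, 0.9]])
print(summarize(R))
```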
The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade including a remarkably wide array of applications, having already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet, AI's path has never been smooth, having essentially fallen apart twice in its lifetime ('winters' of AI), both after periods of popular success ('summers' of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.
Missing value imputation is crucial for real-world data science workflows. Imputation in an online setting is even harder, as it requires the imputation method itself to evolve over time. For practical applications, imputation algorithms should produce imputations that match the true data distribution, handle mixed-type data including ordinal, boolean, and continuous variables, and scale to large datasets. In this work, we develop a new online imputation algorithm for mixed data using the Gaussian copula. The online Gaussian copula model meets all of these desiderata: its imputations match the distribution of mixed data, it improves over its offline counterpart in accuracy when the streaming data has a changing distribution, and it gains in speed (by up to an order of magnitude), especially on large-scale datasets. By fitting the copula model to online data, we also provide a new method for detecting change points in the multivariate dependence structure of data with missing values. Experimental results on synthetic and real-world data validate the performance of the proposed methods.
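A simplified, offline sketch of the Gaussian-copula imputation mechanics discussed above (the paper's contribution is an online variant with full mixed-type support; this toy version handles continuous columns only): map each column to latent normal scores, estimate the latent correlation, impute missing latents by their conditional Gaussian mean, and map back through the empirical quantile function.

```python
import numpy as np
from scipy import stats

def copula_impute(X):
    """Impute NaNs in a continuous data matrix via a Gaussian copula."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    # 1) Map each column to latent normal scores via its empirical CDF.
    Z = np.full_like(X, np.nan)
    for j in range(d):
        obs = ~np.isnan(X[:, j])
        ranks = stats.rankdata(X[obs, j]) / (obs.sum() + 1)
        Z[obs, j] = stats.norm.ppf(ranks)
    # 2) Estimate the latent correlation from the observed latent scores.
    Sigma = np.ma.corrcoef(np.ma.masked_invalid(Z), rowvar=False).filled(0.0)
    np.fill_diagonal(Sigma, 1.0)
    # 3) Impute each missing latent entry by its conditional Gaussian mean,
    #    then map back through the column's empirical quantile function.
    X_imp = X.copy()
    for i in range(n):
        miss, obs = np.isnan(Z[i]), ~np.isnan(Z[i])
        if not miss.any() or not obs.any():
            continue
        cond = Sigma[np.ix_(miss, obs)] @ np.linalg.pinv(Sigma[np.ix_(obs, obs)]) @ Z[i, obs]
        for k, j in enumerate(np.where(miss)[0]):
            col = X[~np.isnan(X[:, j]), j]
            X_imp[i, j] = np.quantile(col, stats.norm.cdf(cond[k]))
    return X_imp
```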